Government Policy


AI Risk Categorization Decoded (AIR 2024): From Government Regulations to Corporate Policies

Zeng, Yi, Klyman, Kevin, Zhou, Andy, Yang, Yu, Pan, Minzhou, Jia, Ruoxi, Song, Dawn, Liang, Percy, Li, Bo

arXiv.org Artificial Intelligence

We present a comprehensive AI risk taxonomy derived from eight government policies from the European Union, United States, and China and 16 company policies worldwide, taking a significant step toward establishing a unified language for generative AI safety evaluation. We identify 314 unique risk categories, organized into a four-tiered taxonomy. At the highest level, this taxonomy encompasses System & Operational Risks, Content Safety Risks, Societal Risks, and Legal & Rights Risks. The taxonomy establishes connections between various descriptions and approaches to risk, highlighting the overlaps and discrepancies between public- and private-sector conceptions of risk. By providing this unified framework, we aim to advance AI safety through information sharing across sectors and the promotion of best practices in risk mitigation for generative AI models and systems.
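A four-tiered taxonomy like the one described maps naturally onto a recursive tree structure. The sketch below is illustrative only: the four level-1 names come from the abstract, but everything else (the class name, the leaf-counting helper) is an assumption, not the paper's actual representation of its 314 categories.

```python
from dataclasses import dataclass, field

@dataclass
class RiskCategory:
    """One node in a tiered risk taxonomy; leaves are concrete risk categories."""
    name: str
    children: list["RiskCategory"] = field(default_factory=list)

    def leaf_count(self) -> int:
        # Count the lowest-tier categories beneath this node.
        if not self.children:
            return 1
        return sum(c.leaf_count() for c in self.children)

# Level 1 of the taxonomy, as named in the abstract; lower tiers omitted.
taxonomy = RiskCategory("AI Risks", [
    RiskCategory("System & Operational Risks"),
    RiskCategory("Content Safety Risks"),
    RiskCategory("Societal Risks"),
    RiskCategory("Legal & Rights Risks"),
])

print(taxonomy.leaf_count())  # 4 in this skeleton; 314 in the full taxonomy
```

Filling in tiers 2 through 4 would reproduce the full hierarchy, with each of the 314 unique risk categories appearing as a leaf.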


Carbon Market Simulation with Adaptive Mechanism Design

Wang, Han, Li, Wenhao, Zha, Hongyuan, Wang, Baoxiang

arXiv.org Artificial Intelligence

A carbon market is a market-based tool that incentivizes economic agents to align individual profit with global utility, i.e., reducing carbon emissions to tackle climate change. Cap and trade stands as a critical principle based on allocating and trading carbon allowances (carbon emission credits), enabling economic agents to follow planned emissions and penalizing excess emissions. A central authority is responsible for introducing and allocating those allowances in cap and trade. However, the complexity of carbon market dynamics makes accurate simulation intractable, which in turn hinders the design of effective allocation strategies. To address this, we propose an adaptive mechanism design framework, simulating the market using hierarchical, model-free multi-agent reinforcement learning (MARL). Government agents allocate carbon credits, while enterprises engage in economic activities and carbon trading. This framework captures agent behavior comprehensively. Numerical results show that MARL enables government agents to balance productivity, equality, and carbon emissions. Our project is available at https://github.com/xwanghan/Carbon-Simulator.
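The cap-and-trade mechanism the abstract describes (a central authority allocates allowances, firms trade surplus credits, and uncovered excess emissions are penalized) can be sketched in a few lines. This is a deliberately minimal single-round model under assumed parameters (uniform allocation, a fixed credit price, random emissions), not the paper's MARL simulator.

```python
import random

def cap_and_trade_round(n_firms=4, cap_total=100.0, price=2.0, penalty=5.0, seed=0):
    """One round of a toy cap-and-trade market; all parameters are illustrative."""
    rng = random.Random(seed)
    cap = cap_total / n_firms  # central authority allocates allowances uniformly
    emissions = [rng.uniform(0.5 * cap, 1.5 * cap) for _ in range(n_firms)]
    surplus = sum(max(cap - e, 0.0) for e in emissions)  # credits offered for sale
    deficit = sum(max(e - cap, 0.0) for e in emissions)  # credits firms must buy
    traded = min(surplus, deficit)                       # market clears the overlap
    uncovered_excess = deficit - traded                  # excess no credit can cover
    return {
        "traded_credits": traded,
        "trade_cost": traded * price,
        "penalty_cost": uncovered_excess * penalty,      # penalizing excess emissions
    }

result = cap_and_trade_round()
```

In the paper's framework, a learned government policy would replace the uniform allocation and the firms' emissions would come from learned enterprise policies; this sketch only shows the accounting that makes the cap binding.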


The Rise of Automation – How It Is Impacting the Job Market – Towards AI

#artificialintelligence

Originally published on Towards AI. The prospect of machines replacing humans in the workplace has been a constant source of fear since the Industrial Revolution, and it has become a more prominent topic of discussion in recent decades with the rise of automation. Automation has been around for centuries, and its use has increased significantly in recent years across many industries, including manufacturing, transportation, healthcare, and retail. The implementation of automation can bring many benefits, such as increased productivity, efficiency, and improved quality and safety. However, it also poses challenges and potential negative impacts on the economy and job market.


AI's Future Doesn't Have to Be Dystopian - Boston Review

#artificialintelligence

Artificial Intelligence (AI) is not likely to make humans redundant. Nor will it create superintelligence anytime soon. But like it or not, AI technologies and intelligent systems will make huge advances in the next two decades--revolutionizing medicine, entertainment, and transport; transforming jobs and markets; enabling many new products and tools; and vastly increasing the amount of information that governments and companies have about individuals. Should we cherish and look forward to these developments, or fear them? There are reasons to be concerned. Current AI research is too narrowly focused on making advances in a limited set of domains and pays insufficient attention to its disruptive effects on the very fabric of society. If AI technology continues to develop along its current path, it is likely to create social upheaval for at least two reasons. For one, AI will affect the future of jobs. Our current trajectory automates work to an excessive degree while refusing to invest in human productivity; further advances will displace workers and fail to create new opportunities (and, in the process, miss out on AI's full potential to enhance productivity). For another, AI may undermine democracy and individual freedoms. Each of these directions is alarming, and the two together are ominous. Shared prosperity and democratic political participation do not just critically reinforce each other: they are the two backbones of our modern society. Worse still, the weakening of democracy makes formulating solutions to the adverse labor market and distributional effects of AI much more difficult. These dangers have only multiplied during the COVID-19 crisis. Lockdowns, social distancing, and workers' vulnerability to the virus have given an additional boost to the drive for automation, with the majority of U.S. businesses reporting plans for more automation.


Letting AI hold the public purse?!

#artificialintelligence

Each year, national and local governments determine the relative priorities of services to allocate funding. How would AI spend the cash? What makes more sense for a vibrant society -- spending on economic development or growth, spending on education, international development, social care, libraries? What would the ideal balance look like? If this question sounds familiar, that's because it's not the first time we've tried to apply artificial intelligence (AI) to making this decision.


AI's Future Doesn't Have to Be Dystopian

#artificialintelligence

The direction of AI development is not preordained. It can be altered to increase human productivity, create jobs and shared prosperity, and protect and bolster democratic freedoms--if we modify our approach. Artificial Intelligence (AI) is not likely to make humans redundant. Nor will it create superintelligence anytime soon. But like it or not, AI technologies and intelligent systems will make huge advances in the next two decades--revolutionizing medicine, entertainment, and transport; transforming jobs and markets; enabling many new products and tools; and vastly increasing the amount of information that governments and companies have about individuals. Should we cherish and look forward to these developments, or fear them? There are reasons to be concerned. Current AI research is too narrowly focused on making advances in a limited set of domains and pays insufficient attention to its disruptive effects on the very fabric of society. If AI technology continues to develop along its current path, it is likely to create social upheaval for at least two reasons. For one, AI will affect the future of jobs. Our current trajectory automates work to an excessive degree while refusing to invest in human productivity; further advances will displace workers and fail to create new opportunities (and, in the process, miss out on AI's full potential to enhance productivity). For another, AI may undermine democracy and individual freedoms. Each of these directions is alarming, and the two together are ominous. Shared prosperity and democratic political participation do not just critically reinforce each other: they are the two backbones of our modern society.


Salary Disputes

Communications of the ACM

In Moshe Vardi's September 2020 column, "Where Have All the Domestic Graduate Students Gone?," the short but woefully incomplete answer is that the wage premium for a Ph.D. in CS is simply too small to justify forgoing five years of industry-level salary. But why is that the case? Part of the answer may be due to government policy discussed back in 1989, when an NSF document addressed the "problem" of Ph.D. salaries being too high, and suggested as a remedy increasing the pool of international students (https://bit.ly/2IuFZl7). This would swell the labor market, holding down wage growth. "A growing influx of foreign Ph.D.'s into U.S. labor markets will hold down the level of Ph.D. salaries to the extent that foreign students are attracted to U.S. doctoral programs as a way of immigrating to the U.S." But the domestic students would find that the resulting wage suppression would make Ph.D. study a bad choice: "... a key issue [for the domestic students] is pay. The relatively modest salary premium for acquiring [a] Ph.D. may be too low to attract a number of able potential graduate students ... A number of them will select alternative career paths ... by choosing to acquire a 'professional' degree in business or law ... For these baccalaureates, the effective premium for acquiring a Ph.D. may actually be negative."


Can Artificial Intelligence Save the Regulatory State?

#artificialintelligence

The Department of Justice recently sued Google for allegedly monopolizing the market for search engines. The Department's complaint alleges that Google took numerous actions well before 2010 that formed part of the claimed antitrust violations. I have no comment about the merits. What I do want to call attention to, however, are the dates: a lawsuit beginning in 2020 to try to correct the market consequences of actions that began more than 10 years ago. The revolution that some scholars call "regulating by robot" is already underway.


The Art of AI: An interview with Kai-Fu Lee

#artificialintelligence

Editor's note: A leading figure in the Chinese tech scene and in artificial-intelligence development globally, Kai-Fu Lee earned a PhD in computer science from Carnegie Mellon University in 1988 before serving in executive roles at Apple, SGI, Microsoft, and Google, where he was president of Google China. Now chairman and CEO of Sinovation Ventures in Beijing, he is the author of AI Superpowers: China, Silicon Valley, and the New World Order. Here, he discusses with Project Syndicate the global AI race, the current state of the field, and what may – and should – come next. Question: As someone who long worked for U.S. companies and now oversees a tech venture capital firm, you're deeply familiar with the world's two main settings for AI development and research. What are the trade-offs of each R&D environment?


AI experts call for 'bias bounties' to boost ethics scrutiny – Government & civil service news

#artificialintelligence

Experts from the private sector and leading research labs in the US and Europe have joined forces to create a toolkit for turning AI ethics principles into practice. The preprint paper, published last week, advocates paying people for finding risks of bias in artificial intelligence (AI) systems -- adapting a model used to check the security of new computer systems, in which hackers are paid 'bounties' for identifying weaknesses. The paper also proposes better linking independent third-party auditing operations and government policies to foster a market in regulatory systems, and suggests that governments increase funding for researchers in academia to verify performance claims made by industry. The 80-page paper, Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims, has been put together by AI specialists from 30 organisations including Google Brain, Intel, OpenAI, Stanford University and the Leverhulme Centre for the Future of Intelligence. "In order for AI developers to earn trust from system users, customers, civil society, governments, and other stakeholders that they are building AI responsibly, there is a need to move beyond [ethics] principles to a focus on mechanisms for demonstrating responsible behaviour," the executive summary reads.